Linearly Solvable Optimal Control

Authors

  • K. Dvijotham
  • E. Todorov
Abstract

We summarize the recently-developed framework of linearly-solvable stochastic optimal control. Using an exponential transformation, the (Hamilton-Jacobi) Bellman equation for such problems can be made linear, giving rise to efficient numerical methods. Extensions to game theory are also possible and lead to linear Isaacs equations. The key restriction that makes a stochastic optimal control problem linearly-solvable is that the noise and the controls must act in the same subspace. Apart from being linearly solvable, problems in this class have a number of unique properties including: path-integral interpretation of the exponentiated value function; compositionality of optimal control laws; duality with Bayesian inference; trajectory-based Maximum Principle for stochastic control. Development of a general class of more easily solvable problems tends to accelerate progress, as linear systems theory has done. The new framework may have similar impact in fields where stochastic optimal control is relevant.
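
As a concrete illustration of the linearizing transformation, consider the finite-state formulation (a linearly-solvable MDP). The notation below is ours, not the abstract's: a state cost q(x), passive dynamics p(x'|x), controlled transitions u(x'|x), and a KL control cost measuring how far u deviates from p.

```latex
% First-exit LMDP: cost rate q(x) + KL( u(.|x) || p(.|x) ).
% Minimizing the Bellman equation over u(.|x) in closed form and substituting the
% desirability z(x) = exp(-v(x)) makes the equation linear in z:
\[
  z(x) \;=\; e^{-q(x)} \sum_{x'} p(x' \mid x)\, z(x'),
  \qquad
  u^{*}(x' \mid x) \;=\; \frac{p(x' \mid x)\, z(x')}{\sum_{y} p(y \mid x)\, z(y)} .
\]
```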

Similar Articles

Nonlinear Stochastic Control and Information Theoretic Dualities: Connections, Interdependencies and Thermodynamic Interpretations

In this paper, we present connections between recent developments in the linearly-solvable stochastic optimal control framework and early work in control theory based on the fundamental dualities between free energy and relative entropy. We extend these connections to nonlinear stochastic systems with non-affine controls by using the generalized version of the Feynman–Kac lemma. We present alt...
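
The free energy / relative entropy duality referred to above is commonly stated as a Legendre-type variational identity. A hedged sketch in our notation, with path cost J, reference (uncontrolled) path measure P, and temperature λ > 0:

```latex
\[
  -\lambda \log \mathbb{E}_{P}\!\left[ e^{-J/\lambda} \right]
  \;=\;
  \min_{Q \ll P} \Big\{ \mathbb{E}_{Q}[J] + \lambda\, \mathrm{KL}(Q \,\|\, P) \Big\},
  \qquad
  dQ^{*} \propto e^{-J/\lambda}\, dP .
\]
```

The left-hand side is a free energy and the right-hand side an entropy-regularized control problem; this identity is what the exponential (logarithmic) transformation of the value function exploits.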

A Unified Theory of Linearly Solvable Optimal Control

We present a unified theory of Linearly Solvable Optimal Control, that is, a class of optimal control problems whose solution reduces to solving a linear equation (for finite state spaces) or a linear integral equation (for continuous state spaces). The framework presented includes all previous work on linearly solvable optimal control as special cases. It includes both standard control problem...
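
For the finite-state case, "solving a linear equation" can be made concrete with a short numerical sketch. The chain problem, cost values, and variable names below are illustrative assumptions, not taken from the paper; only the linear Bellman equation itself follows the linearly-solvable MDP framework.

```python
import numpy as np

# Toy first-exit LMDP on a chain: states 0..N-1 are interior, state N is an absorbing goal.
N = 10
P = np.zeros((N + 1, N + 1))      # passive (uncontrolled) dynamics: a random walk
for s in range(N):
    P[s, max(s - 1, 0)] += 0.5    # step left, reflecting at state 0
    P[s, s + 1] += 0.5            # step right, eventually reaching the goal
P[N, N] = 1.0                     # the goal is absorbing

q = np.full(N, 0.1)               # per-step state cost at interior states (assumed)
z_goal = np.array([1.0])          # desirability at the goal: exp(-final cost), final cost = 0

# Linear Bellman equation for the desirability z = exp(-v):
#   z_I = diag(exp(-q)) @ (P_II @ z_I + P_IB @ z_B)   =>   a single linear solve.
M = np.diag(np.exp(-q))
z_interior = np.linalg.solve(np.eye(N) - M @ P[:N, :N], M @ P[:N, N:] @ z_goal)

v = -np.log(z_interior)           # optimal cost-to-go at the interior states
print(np.round(v, 3))             # cost decreases toward the goal state
```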

The δ-sensitivity and ...

We provide optimal control laws by using tools from the stochastic calculus of variations and the mathematical concept of δ-sensitivity. The analysis relies on logarithmic transformations of the value functions and the use of linearly solvable Partial Differential Equations (PDEs). We derive the corresponding optimal control as a function of the δ-sensitivity of the logarithmic transformation of the...
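
To make the logarithmic transformation explicit: for control-affine diffusions in which noise and control enter through the same matrix G(x) (the linearly-solvable setting), writing Ψ = exp(-v/λ) removes the quadratic term from the HJB equation. The notation below is ours and the paper's δ-sensitivity machinery is not reproduced; this is only the standard linearization step under the usual assumptions.

```latex
% Dynamics and cost (first-exit problem), with the compatibility condition
% Sigma = lambda R^{-1} tying the noise covariance to the control-cost weight:
\[
  dx = f(x)\,dt + G(x)\bigl(u\,dt + d\omega\bigr), \qquad
  \ell(x,u) = q(x) + \tfrac{1}{2}\, u^{\top} R\, u, \qquad
  \operatorname{Cov}(d\omega) = \Sigma\, dt = \lambda R^{-1} dt .
\]
% With Psi(x) = exp(-v(x)/lambda), the nonlinear HJB equation becomes linear in Psi:
\[
  0 \;=\; -\tfrac{q(x)}{\lambda}\,\Psi
        + f(x)^{\top}\nabla\Psi
        + \tfrac{1}{2}\operatorname{tr}\!\bigl(G(x)\,\Sigma\,G(x)^{\top}\nabla^{2}\Psi\bigr),
  \qquad
  u^{*}(x) = \lambda\, R^{-1} G(x)^{\top}\, \frac{\nabla\Psi(x)}{\Psi(x)} .
\]
```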

Hierarchical Linearly-Solvable Markov Decision Problems

We present a hierarchical reinforcement learning framework that formulates each task in the hierarchy as a special type of Markov decision process for which the Bellman equation is linear and has an analytical solution. Problems of this type, called linearly-solvable MDPs (LMDPs), have interesting properties that can be exploited in a hierarchical setting, such as efficient learning of the optimal ...
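
One LMDP property often exploited in compositional and hierarchical settings is that solutions combine linearly. A hedged statement of the general result (this summarizes the standard compositionality property, not necessarily this paper's exact construction): if several first-exit problems share the same passive dynamics and running costs and differ only in their final costs g_i, then a mixture of their exponentiated final costs yields the same mixture of their solutions:

```latex
\[
  e^{-g(x)} = \sum_i w_i\, e^{-g_i(x)} \ \text{on boundary states}
  \quad\Longrightarrow\quad
  z(x) = \sum_i w_i\, z_i(x) \ \text{everywhere} .
\]
```

In other words, a new task whose terminal cost is a soft combination of subtask terminal costs inherits its desirability z, and hence its optimal policy, directly from the subtask solutions z_i.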

Optimal intelligent control for glucose regulation

This paper introduces a novel control methodology based on a fuzzy controller for the glucose-insulin regulatory system of a type I diabetes patient. First, in order to incorporate knowledge about patient treatment, a fuzzy logic controller is employed to regulate the gains of the base Proportional-Integral (PI) controller, acting as a self-tuning controller. Then, to overcome the key drawback of fuzzy logic contro...
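
The self-tuning idea can be sketched in a few lines. The code below substitutes a crude error-based gain scaling for the paper's fuzzy tuner, and the first-order plant together with every numeric value is an illustrative assumption rather than the paper's glucose-insulin model.

```python
# Minimal sketch of a self-tuning PI loop.  The gain scaling below is a crude
# stand-in for a fuzzy tuner; the plant and all constants are assumptions.
def simulate(setpoint=100.0, kp=0.04, ki=0.01, dt=1.0, steps=200):
    y = 180.0                     # initial plant output (assumed)
    integral = 0.0
    for _ in range(steps):
        error = setpoint - y
        # Scheduling stand-in: larger errors get proportionally stronger gains.
        scale = 1.0 + min(abs(error) / setpoint, 1.0)
        integral += error * dt
        u = scale * (kp * error + ki * integral)
        # Toy first-order plant: drifts toward the setpoint and responds to u.
        y += dt * (-0.05 * (y - setpoint) + u)
    return y

print(round(simulate(), 2))       # settles near the setpoint
```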


Journal:

Volume   Issue

Pages   -

Publication date: 2012